LANDTUT1.DOC -- Text File, 1991-04-28, 71KB, 1,436 lines
THE LAN TUTORIAL SERIES -- by Aaron Brenner
PART 1: Buying a LAN
A Definition.......................1
Why Buy a LAN?.....................1
LAN Components.....................2
Application Specific...............3
PART 2: PROTOCOLS
Definition.........................5
Protocols, Really..................5
The OSI Model......................6
Popular Physical Protocols.........7
Data Link Protocols................9
Transport Protocols...............10
TCP/IP............................10
Many More.........................11
PART 3: (PROTOCOLS: continued)
Data Link Protocols...............13
Transport Protocols...............14
Many More.........................15
PART 4: LAN Access Methods
Definition........................17
Ethernet..........................17
Arcnet............................18
Token Ring........................19
PART 5: LAN Interface Cards
Definition........................20
Preparing for Transmission........21
TRANSMISSION......................23
Encoding/decoding.................23
Cable access......................24
Handshaking.......................24
Transmission/reception............24
PICK A CARD.......................24
PART 1: Buying a LAN
This short article kicks off LAN Magazine's new series of
"clip-and-save" tutorials about LANs. Each month we will print an
easy-to-read tutorial -- aimed at users new to networking --
covering one aspect of LAN purchase, installation and management.
This first tutorial is a very basic introduction to the issues
involved in buying a LAN. Along the way is an overview of the
components of a LAN and a list of the next 12 topics to be
covered.
A year from now, if you clip carefully, you should have a short,
easy-to-understand introductory pamphlet about the principles of
local area networking.
A Definition
A LAN is a data communications network spanning a limited
geographical area, a few miles at most. It allows users to share
information and computer resources, including mass data storage,
backup facilities, software, printers, plotters and processors.
Typically, a LAN is made up of network interface cards (circuit
boards) that fit inside the connected computers, cable to connect
these computers, protocol software to move data from computer to
computer, user interface software to connect user and network,
and operating system software to actually service users' needs
for things like files and printers.
Why Buy a LAN?
LANs require a certain mind set, something different from
traditional MIS (Management Information Systems) thinking.
Once a LAN is installed, things like initiative, democracy,
participation, communication and independence take over.
Hierarchy, dependence, regulation and isolation are thrown out
the window.
If you have the right mind set, the four best reasons to buy a
LAN are:
* Communication. A LAN connects the people in your company. Once
connected, every possible form of discourse is possible, from
electronic "yellow sticky things" to formal legal briefs. People
like to communicate.
* Democracy. A LAN distributes your company's computer resources
to everyone connected. Once the LAN is installed, everyone from
mail clerk to CEO will want, and should have, access.
* Productivity. A LAN's ability to share computer resources and
information easily helps people do their jobs quickly, efficiently
and with less hassle. The LAN will quickly become the heart of your
business. It's the heart of ours.
Page 1
* Savings. A LAN saves money by allowing users to share expensive
computer resources -- printers, plotters, hard disks, WORM drives,
CPUs, software, etc.
If you don't have the right mind set, the four best reasons not
to buy a LAN are:
* Communication. Connecting all the people in your company might
let them talk to each other. Who knows, they might plot your
overthrow.
* Democracy. Distributing resources will give everyone in the
company a measure of power. Who knows, they might not do what you
tell them.
* Productivity. Doing the job in new and better ways might lead to
the elimination of dull, tedious work. Who knows, it might mean
the elimination of your job.
* Savings. Sharing expensive computer resources saves money. Who
knows, you might have to save money all the time.
LAN Components
Buying and installing a LAN is not simple. There are many things
to think about. Here are 12 that cover the basics of LAN purchase
and installation.
1. The OSI Model, which stands for the Open Systems Interconnection
model of the International Standards Organization, is a useful
categorization of the different parts of a LAN. It is an overview of
how a network works.
2. The Access Method is the way the network arbitrates which device
may use the cable and for how long. It is necessary since devices
can't talk at the same time. Different access methods provide
different network performance and reliability.
3. The Interface Card is the device that connects the computer to
the cable. These vary by type, size, speed and much more.
Performance is a key issue.
4. The Cabling is the physical connection between networked devices.
Fiber optic, coaxial and twisted-pair are the main choices. Each has
advantages and drawbacks.
5. LAN Protocols are software that run in the computer and on the
network interface card. They provide the means for shipping data
between devices. Certain sets of protocols are good for certain
applications. Which you choose depends upon what you use your
network for.
Page 2
6. The LAN Operating System is the software that resides in the
computer. It provides the interface between the user or application
and the network. The key issues here are performance, compatibility
and ease of use.
7. The File Server stores and distributes program and data files to
be shared by users on the network. It is a hardware/software
combination heavily dependent on the LAN operating system and the
type of work you are doing.
8. Network Printing allows many users to share one or more printing
devices. Some LAN operating systems do it better than others.
Sometimes you'll need special network print utilities.
9. Tape backup, done regularly, maintains data integrity on a LAN
by recording data on tape instead of disk. Key issues include
capacity, speed, compatibility and ease of use.
10. LAN Security covers the methods used to protect data from
corruption by unknowing users, accidents and intruders. These include
physical security, encryption and passwords. But the type of security
you use depends mostly on the type of work you are doing on your
network.
11. Bridges and gateways connect networks. Each uses different methods
with different results. Bridges connect networks at a lower level than
gateways, making them more versatile. On the other hand, gateways
connect networks that bridges can't. Performance and compatibility are
the key issues.
12. LAN Management is the name given to the best job in the world:
taking care of the network. Different LANs provide different levels of
management to make the job easier. Your level of skill and confidence
will be crucial here.
Application Specific
The type of LAN you buy depends primarily on the work it will do.
Before evaluating different vendor options, assess your company's
computer needs and resources, present and future. As much as possible,
conserve your present computing power, even if you plan to upgrade.
Will the network be used mostly to share peripherals like printers
and hard disks? If so, access methods and performance are less
important than reliability and ease of use. Will the network be used
mostly for large database access? If so, performance is paramount.
Will the network be used mostly for communications and electronic
mail? If so, wide-range standards compatibility may be the most
important issue in your decision.
Usually, you want the network to do everything: start out with peripheral
sharing, add databases, then connect to mainframes. Thus, growth
potential and standards are very important to your decision, since
you're laying a foundation upon which you will build.
Page 3
Other overall considerations include: the education of users
(beginners and experts); the types of computers you are connecting
(PCs, minicomputers and/or mainframes); and the amount of money you
have (lots or a little).
Unfortunately, no LAN does everything. Vendors make compromises,
sacrificing ease of use for performance, performance for compatibility
or vice versa. Since this is the case, get to know exactly what you
want before you buy.
Page 4
PART 2: PROTOCOLS
Definition
The LAN Magazine "Glossary of LAN Terms" defines a protocol this
way: A set of rules for communicating between computers. Protocols
govern format, timing, sequencing and error control. Without these
rules, the computer will not make sense of the stream of incoming
bits.
But there is more. Communicating data from computer to computer
takes many steps. For example, suppose you are sending a file from one
computer to another. The file has to be broken into pieces. The pieces
have to be grouped in certain fashion. Information must be added to
tell the receiver where each group belongs in relation to others.
Timing information must be added. Error correcting information must be
added, and so on.
Because of this complexity, computer communication is usually broken
down into steps. Each step has its own rules of operation, its own
protocol. These steps must be executed in a certain order, usually
from the top down on transmission and from the bottom up on reception.
Because of this hierarchical arrangement, the term protocol stack is
used to describe the different steps of computer communication. A
protocol stack is simply a set of rules for communication, only it can
be broken down into sets of rules for each step in the sequence.
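The steps described above -- breaking a file into pieces, numbering them and adding error-checking information -- can be sketched in a few lines of code. This is a toy illustration only; the "packet" format here is invented for the example and belongs to no real protocol.

```python
# Toy illustration of what a protocol does on each side of the wire:
# the sender splits data into numbered pieces and adds a checksum to
# each; the receiver verifies the checksums and reassembles the file.
# The packet format (seq, checksum, piece) is invented for the example.

def send(data: bytes, piece_size: int = 4):
    packets = []
    for seq, start in enumerate(range(0, len(data), piece_size)):
        piece = data[start:start + piece_size]
        checksum = sum(piece) % 256          # crude error-detection code
        packets.append((seq, checksum, piece))
    return packets

def receive(packets):
    data = b""
    for seq, checksum, piece in sorted(packets):  # restore original order
        if sum(piece) % 256 != checksum:
            raise ValueError(f"piece {seq} corrupted in transit")
        data += piece
    return data

packets = send(b"hello, network!")
assert receive(packets) == b"hello, network!"
```

Because each piece carries its own sequence number, the receiver can put the file back together even if the pieces arrive out of order -- exactly the kind of bookkeeping a real protocol stack handles for you.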
Protocols, Really
What is a protocol, really? It is software that resides either in
a computer's memory or in the memory of a transmission device like a
network interface card. When data is ready for transmission, this
software is executed. It prepares data for transmission and sets it in
motion. At the receiving end, it takes the data off the wire and
prepares it for the computer, taking off all the information added by
the transmitting end. So, protocols are just software that performs
data transmission.
But there is more. Confusion is caused by the fact that there are
many protocols, many different ways of getting data from one place to
another. Novell does it one way. 3Com does it another. DEC does it a
third way. And since the transmitter and the receiver have to "speak"
the same protocol, these three can't talk directly to each other.
That's where the term protocol standard and the OSI Model fit in.
A protocol standard is a set of rules for computer communication
that has been widely agreed upon and implemented by many vendors,
users and standards bodies. Ideally, a protocol standard should,
when implemented, allow people to talk to each other, even if they are
using equipment from different vendors.
Of course, you don't have to have a "standard" protocol to communicate.
You can make up your own. The only problem is that you are limited to
talking to yourself.
Page 5
Let's look at some of the protocol standards that exist and see if we
can't get a feel for how protocols work. As you will see, there are
many standards -- none of which can be called universal.
The OSI Model
The OSI Model is the best place to start because it is a full protocol
stack. It is a set of protocols that attempt to define and standardize
the entire process of data communications (some protocol standards
only define part of the process). The OSI Model -- which stands for
the Open Systems Interconnection Model of the International Standards
Organization (ISO) -- has the support of most major computer and
network vendors, along with many large customers and the U.S.
government.
The OSI Model is really nothing more than a concept, describing how
data communications should take place. It divides the process into
seven layers. Into these layers fit protocol standards developed by
the ISO and by other standards bodies. At each layer, there are
numerous protocols. That is, the OSI is not a single definition of
how data communications actually takes place in the real world. It
just says, "This is the way things should be divided and these are
the protocols that you can use at each layer." As long as a network
vendor chooses one of the protocols at each layer, the network should
work with other vendors' offerings.
Nobody really believes the hype that the OSI Model will lead to
complete, transparent intercommunication between all computers. We
are just hoping it is a step in the right direction. Each successive
layer of the OSI Model works with the one below it. Remember, protocol
stacks are not democratic; they are rigidly hierarchical. Each layer
of the OSI Model is modular. That is, you may (theoretically)
substitute one protocol for another at the same layer without affecting
the operation of layers above or below. For example, you should be
able to use a Token Ring board or an Ethernet board and still use all
the other pieces of your network, including network operating system,
transport protocols, internetwork protocols, applications interfaces,
etc. Of course, vendors must create these products to the OSI Model
specifications for this to work.
The OSI Model's modularity should become clear as we describe the
major protocols that conform to it. First a look at what each layer is
supposed to do.
1. Physical Layer. The first, or Physical layer, of the OSI Model
conveys the bits that move along the cable. It is responsible for
making sure that the raw bits get from one place to another, no matter
what shape they are in. It deals with the mechanical and electrical
characteristics of the cable.
Page 6
2. Data Link Layer. The second, or Data Link, layer of the OSI Model
is responsible for getting data packaged and onto the network cable.
It manages the physical transfer, providing the blocks of data, their
synchronization, error control and flow control. The Data Link layer
is often divided into two parts -- Logical Link Control (LLC) and
Medium Access Control (MAC) -- depending on the implementation.
3. Network Layer. The third, or Network, layer of the OSI Model
establishes, maintains and terminates connections. It is responsible
for translating logical addresses, or names, into physical addresses.
4. Transport Layer. The fourth, or Transport, layer of the OSI Model
ensures data is sent successfully between the two computers. If data
is sent incorrectly, this layer has the responsibility to ask for
retransmission.
5. Session Layer. The fifth, or Session, layer of the OSI Model
decides when to turn communication on and off between two computers.
It coordinates the interaction between them. Unlike the Network layer,
it deals with the programs running in each machine to establish
conversations between them.
6. Presentation Layer. The sixth, or Presentation, layer of the OSI
Model does code conversion and data reformatting. It is the translator
of the network, making sure the computer is talking in the right
language for the network.
7. Application Layer. The seventh and final, or Application, layer of
the OSI Model is the interface between the software running in the
computer and the network. It supplies functions to the software in
the computer, like electronic mail or file transfer.
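One way to picture how the seven layers cooperate: on transmission, each layer wraps the data in its own header on the way down the stack; on reception, each layer strips its header off on the way up. The sketch below uses the real layer names, but the bracketed "headers" are invented for illustration.

```python
# Each OSI layer adds its own header on the way down (transmission)
# and removes it on the way up (reception). Header contents invented.
LAYERS = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link", "Physical"]

def transmit(message: str) -> str:
    # Work from layer 7 down to layer 1, wrapping at each step,
    # so the Physical layer's header ends up outermost.
    for layer in LAYERS:
        message = f"[{layer}]{message}"
    return message

def deliver(wire: str) -> str:
    # The receiver unwraps in the opposite order, bottom up.
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert wire.startswith(prefix), f"expected {layer} header"
        wire = wire[len(prefix):]
    return wire

assert deliver(transmit("hello")) == "hello"
```

Each layer only touches its own header, which is why you can (in theory) swap one layer's protocol for another without disturbing the rest.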
Unfortunately, protocols in the real world do not conform precisely
to these neat definitions. Some network products combine layers.
Others leave out layers. Still others break apart layers. But no matter
what, all working network products achieve the same result, getting
data from here to there. The question is, do they do it in a way
compatible with the rest of the world's networks? More important, do
they care?
Popular Physical Protocols
Hopefully, all of this will become clearer if we look at some real
protocols and compare them to the OSI Model. The best known physical
layer standards of the OSI Model (there are a few), are those from the
IEEE, the Institute of Electrical and Electronic Engineers. That is,
the ISO adopted some of the IEEE's physical network standards as part
of its OSI Model. These are IEEE 802.3, or Ethernet; IEEE 802.4, or
token-passing bus; and IEEE 802.5, or Token Ring.
Page 7
These three standards define the physical characteristics of the
network and how to get raw data from one place to another. Each is a
Layer 1 standard. They also define how people can use the network at
the same time without bumping into each other. Technically, this last
part is a job for the Data Link layer, Layer 2. We will deal with
this below. For now, let's see just what these standards mean.
IEEE 802.3 defines a physical network that has a bus (straight line)
layout. Data is broadcast throughout the network in no particular
physical direction. All machines receive every broadcast, but only
those meant to receive the data respond with an acknowledgement.
Network access is determined by a protocol called Carrier Sense
Multiple Access With Collision Detection, or CSMA/CD. It lets everyone
send whenever they want. If they bump into each other, they back off,
wait, and send again until they get through. Thus, the more users, the
more crowded and slower the network -- like the freeway. (More on
network access next month).
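The send-collide-back-off cycle can be simulated in a few lines. This is a heavily simplified toy (one shared channel, slotted time, no carrier-sensing delays, a made-up backoff range), not the real 802.3 algorithm:

```python
import random

# Toy CSMA/CD: every station whose wait has expired transmits at once;
# if more than one does, that's a collision, and each picks a random
# backoff before trying again. Backoff range of 1-4 slots is invented.
def csma_cd(stations, rng, max_slots=1000):
    waiting = {s: 0 for s in stations}     # slots left before next try
    order, slot = [], 0
    while waiting and slot < max_slots:
        ready = [s for s, w in waiting.items() if w <= 0]
        if len(ready) == 1:                # exactly one sender: success
            order.append(ready[0])
            del waiting[ready[0]]
        else:                              # 0 ready: idle; 2+: collision
            for s in ready:
                waiting[s] = rng.randint(1, 4)
        for s in waiting:
            waiting[s] -= 1
        slot += 1
    return order, slot

order, slots = csma_cd(["A", "B", "C"], random.Random(1))
```

Notice that the order in which stations finally get through is random, and that a busier network means more collisions and more waiting -- the freeway effect described above.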
IEEE 802.4 defines a physical network that has a bus layout. It is
also a broadcast network. All machines receive all data but do not
respond unless data is addressed to them.
Network access is determined by a token that moves around the network
in a logical fashion. It is broadcast to every machine but only the
machine that is next for the token gets it. Once a machine has the
token, and not before or after, it may transmit data. The MAP/TOP
(Manufacturing Automation Protocol/Technical Office Protocol) standard
uses this protocol.
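Token passing is, by contrast, completely orderly, as this toy sketch shows (a fixed logical rotation; real token handling is far more involved):

```python
# Toy token-passing: the token visits stations in a fixed logical
# order; only the current holder may transmit, so there are never
# any collisions -- just waiting for your turn.
def token_rounds(stations, wants_to_send, rounds=1):
    transmissions = []
    for _ in range(rounds):
        for station in stations:           # token moves station to station
            if station in wants_to_send:   # holder may transmit; others wait
                transmissions.append(station)
    return transmissions

# With stations A..D and only B and D wanting to send, one rotation
# of the token yields exactly one orderly transmission each.
sent = token_rounds(["A", "B", "C", "D"], {"B", "D"})
```

Compare this with the CSMA/CD freeway: token passing trades a little guaranteed waiting for the certainty that nobody ever collides.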
IEEE 802.5 defines a physical network that has a ring layout. Data
moves around the ring from station to station. Each station regenerates
the signal from the previous station. In this way it is not a broadcast
network.
The network access protocol is token-passing. The difference is that
the token moves about in a ring, rather than over a bus. IBM, Texas
Instruments and Ungermann-Bass are the only vendors of the chips needed
to make Token Ring network interface cards. Nevertheless, it is fast
becoming one of the most popular network hardware options.
There are other Physical and Data Link layer standards, some that
conform to the OSI Model and others that don't. The most famous that
does not is Arcnet. It uses a token-passing bus access method, but not
the same one as IEEE 802.4. A new physical standard called Fiber
Distributed Data Interface (FDDI) is a 100-megabit-per-second physical
protocol using token ring over fiber optic cable. It will probably be
OSI-compatible.
Page 8
Data Link Protocols
As we said, the IEEE protocol standards are not confined to the
Physical layer but also work at the Data Link layer. We also said
that the Data Link layer is often divided into two parts. The upper
part is called Logical Link Control (LLC) and the lower part is called
Medium Access Control (MAC). As it turns out, the IEEE standards
define the lower, or MAC, half of the Data Link layer -- the part that
determines how network users keep from bumping into each other.
Medium Access Control is just what it sounds like. It is the protocol
that determines which computer gets to use the network cable when many
computers are trying. We saw that IEEE 802.3 lets everyone simply bump
into each other and keep trying until they get through. IEEE 802.4 and
802.5 are more ordered, limiting conversation to the computer with the
token. Remember, all of this is done in fractions of a second. So even
when the network is crowded, no one really waits very long for access
on any of the three types of networks.
The other half of the Data Link layer, LLC, provides reliable data
transfer over the physical link. In essence, it manages the physical
link.
There are two reasons why the IEEE split the Data Link layer in half
(and why the ISO accepted it). First of all, the Data Link layer has
two jobs to do. The first is to coordinate the physical transfer of
data. The second is to manage access to the physical medium. Splitting
the job allows for more modularity, and therefore flexibility.
The second reason also has to do with modularity, but in a different
way. The type of Medium Access Control has more to do with the physical
requirements of the network than actually managing the transfer of
data. In other words, the MAC layer is "closer" to the physical layer
than the LLC layer. By splitting the two, it is possible to create a
number of MAC layers (corresponding to physical layers) and just one
LLC layer that can handle them all. This increases the flexibility of
the standard. It also gives LLC an important role in providing an
interface between the various MAC layers and the higher-layer protocols.
By the way, Logical Link Control is the more common name of the IEEE's
802.2 specification. The numbers give it away. 802.2 works with 802.3,
802.4 and 802.5. It should also work with emerging standards, like FDDI.
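The payoff of the LLC/MAC split is plain if you sketch it as code. In this toy (class names and frame formats invented for illustration), one LLC layer runs unchanged over interchangeable MAC layers:

```python
# The split lets one LLC ride on any MAC: the LLC does its job
# (here, just tagging frames) through a uniform MAC interface,
# and the MAC beneath it can be swapped freely. All names invented.
class CSMACDMac:                       # stands in for 802.3 hardware
    def send(self, frame): return f"csma/cd<{frame}>"

class TokenRingMac:                    # stands in for 802.5 hardware
    def send(self, frame): return f"token-ring<{frame}>"

class LLC:                             # one 802.2 layer for every MAC
    def __init__(self, mac): self.mac = mac
    def send(self, data): return self.mac.send(f"LLC|{data}")

# The same LLC code runs unchanged over either access method:
over_ethernet = LLC(CSMACDMac()).send("hi")
over_ring = LLC(TokenRingMac()).send("hi")
```

That is modularity in miniature: write the upper half once, plug in as many lower halves as there are cable types.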
There are other protocols that perform the LLC functions. High-level
Data Link Control (HDLC) is the protocol from the ISO. Like LLC, it
conforms to the OSI model. IBM's SDLC (Synchronous Data Link Control)
is a Data Link layer standard that does not conform to the OSI Model
but does perform similar functions. IBM has many products that do not
follow the OSI Model or its hierarchical setup. IBM has pledged support
of OSI, however.
Page 9
Transport Protocols
The ISO is in the process of establishing protocol standards for the
middle layers of the OSI Model. As of yet, none of these have been
implemented on a widespread basis, nor has the complete OSI protocol
stack been established. To make matters more confusing, most of the
middle-layer protocols on the market today do not conform neatly to
the OSI Model's network, transport and session layers. They were
created before the ISO started work on the model.
The good news is many existing protocols are being incorporated into
the OSI Model. Where existing protocols are not incorporated, interfaces
between them and the OSI Model are being implemented. This is the case
for TCP/IP, NetBIOS and APPC, the major middle-layer protocols available
today.
In the PC LAN environment, NetBIOS is the most important protocol. It
stands for Network Basic Input/Output System. IBM developed it as a
BIOS for networks. It is essentially a Session layer (Layer 5) protocol
that acts as an applications interface to the network. It provides the
tools for a program to establish a session with another program over
the network. Hundreds of programs have been written to this interface,
making it the most widespread protocol in the PC network arena.
NetBIOS does not obey the rules of the OSI Model in that it does not
talk only to the layers above and below it. As we said, programs can
talk directly to NetBIOS, skipping the application and presentation
layers. This doesn't keep NetBIOS from doing its job. It just makes it
incompatible with the OSI Model, which is not the end of the world.
Someone will write an interface between the two, soon.
NetBIOS is limited to working on one network. Therefore, some network
vendors have established an interface between NetBIOS and TCP/IP, a
protocol from the Department of Defense for use over large combinations
of networks (internetworks).
TCP/IP
TCP/IP stands for Transmission Control Protocol/Internet Protocol.
TCP is a Transport protocol (Layer 4), corresponding to the definition
we gave above. Its job is to get data from one place to another without
errors. It forms an interface between the protocols above and below
-- shielding the upper layers from concern about the connection and
the lower layers from concern about transmission content.
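The heart of that error-free delivery is retransmission: keep resending a piece until the other side acknowledges it. The toy below is a bare stop-and-wait sketch, not TCP itself; the "flaky wire" that loses the first attempt at each piece is invented for the demonstration:

```python
# Toy stop-and-wait transfer: the sender retransmits each piece
# until the receiver acknowledges it -- the essence of the transport
# layer's error-recovery job. The lossy wire is simulated by
# dropping the first attempt for every piece.
def reliable_send(pieces, wire):
    delivered, attempts = [], 0
    for seq, piece in enumerate(pieces):
        while True:
            attempts += 1
            if wire(seq, piece):         # True means received intact
                delivered.append(piece)
                break                    # acknowledged; next piece
    return delivered, attempts

drops = set()
def flaky_wire(seq, piece):
    if seq not in drops:                 # lose the first try of each piece
        drops.add(seq)
        return False
    return True

data, tries = reliable_send(["he", "ll", "o"], flaky_wire)
```

Even though every piece was lost once, everything arrives: three pieces, six attempts, zero data lost. The application above the transport layer never notices.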
The IP protocol is for getting data from one network to another.
Its main concern is bridging the differences between networks so
they don't have to be modified to talk to each other. It does
this by providing rules for the breakdown of data to conform with
a given network. Gateways, which are the physical translators
between networks, use IP's rules to take data from one network,
modify it and route it correctly over another network.
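One concrete example of "breaking down data to conform with a given network" is fragmentation: when a packet crosses into a network with a smaller maximum packet size, a gateway splits it into fragments that fit, each tagged with its offset so the far end can reassemble. A toy sketch (sizes and format invented):

```python
# Toy gateway fragmentation: split a payload into (offset, piece)
# fragments no larger than the next network's maximum size, then
# reassemble them at the destination using the offsets.
def fragment(payload: bytes, mtu: int):
    return [(offset, payload[offset:offset + mtu])
            for offset in range(0, len(payload), mtu)]

def reassemble(fragments):
    return b"".join(piece for _, piece in sorted(fragments))

packet = b"a packet too big for the next network"
frags = fragment(packet, mtu=8)
assert reassemble(frags) == packet
```

Because each fragment carries its offset, the destination can rebuild the packet even if fragments arrive out of order, and neither network had to change to accommodate the other.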
Page 10
TCP/IP enjoys enormous support in government, scientific and
academic internetworks. These computers use UNIX and other
large-computer operating systems. In the past few years, business
internetworks have begun to approach the size of those in
government and universities. This has driven these businesses to
look for internetwork protocol standards. They have found TCP/IP
useful and it has become a de facto standard. Many see it as an
interim solution until the OSI transport and internetwork
protocols are finished. TCP/IP products for DOS-based networked
PCs are also available.
Often when TCP/IP is discussed, acronyms like SMTP, FTP and
TELNET are tossed around. These are applications that have been
written for TCP/IP and are widely used. They work at the
Applications layer (Layer 7). SMTP stands for Simple Mail
Transfer Protocol. FTP stands for File Transfer Protocol. TELNET
is the name for a terminal emulation protocol. These protocols,
written for TCP/IP, do exactly what they say they do.
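These application protocols are themselves just rule sets, and remarkably readable ones. The sketch below lists the plain-text commands an SMTP client issues to hand one message to a mail server (the commands are real SMTP; the hostname and addresses are invented, and no connection is actually made):

```python
# The SMTP conversation is plain text: these are the commands a
# client would issue to deliver one message. Hostname and addresses
# are invented examples; nothing is sent anywhere.
def smtp_script(sender, recipient, body):
    return ["HELO client.example",
            f"MAIL FROM:<{sender}>",
            f"RCPT TO:<{recipient}>",
            "DATA",
            body,
            ".",                       # a lone dot ends the message body
            "QUIT"]

script = smtp_script("alice@here.example", "bob@there.example",
                     "Meet at noon.")
```

Every line rides on TCP/IP underneath; SMTP itself only has to worry about the conversation, which is exactly the layering the OSI Model describes.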
Advanced Program-to-Program Communications, or APPC, is another
protocol for large networks. It comes from IBM and is part of Big
Blue's Systems Network Architecture (SNA). It is similar to
NetBIOS in that it provides an interface to the network for
programs so they may communicate, but it is not limited to one
network as is NetBIOS. APPC is geared toward mainframe computers,
though IBM is offering it as part of its OS/2 Extended Edition.
Using APPC, all computers communicate as peers, even PCs.
Previously in the IBM world, PCs were forced to emulate terminals
when communicating with mainframes. A number of other vendors,
mini and micro, also offer APPC.
APPC has received much publicity. Unfortunately, there are not
many applications for APPC in the PC network arena. There are
more in the minicomputer and mainframe network market.
Nevertheless, IBM and others are promoting APPC as a protocol
standard for the future. Its robustness, flexibility and
reliability make it worth the extra development effort.
There are other middle-layer protocols. XNS, IPX and NetBEUI are
all transport protocols. XNS is short for Xerox Network System.
It was one of the first local area network protocols used on a
wide basis, mainly for Ethernet (802.3) networks. 3Com and many
others use it. IPX is Novell's implementation of XNS. It is not
completely compatible with the original, but very widely used.
NetBEUI is IBM's transport protocol for its PC networking
products. All of these protocols perform similar tasks.
Many More
If it seems like the number of protocols is idiotic, it is and it
isn't. Different protocols have different advantages in different
environments. No single protocol stack will work better than
every other in every setting. NetBIOS seems to work fantastically
in small PC networks but is practically useless for communicating
with mainframes. APPC works well in mainframe environments.
TCP/IP excels in large internetworks.
Page 11
On the other hand, much more is made about the differences in
protocols than is actually warranted. Proprietary protocols are
perfect solutions in many cases. Besides, if the proprietary
protocols are widespread enough, they become standards, and
gateways between them and other standards are built. This is
happening with some of the major protocols we have not covered.
These protocols include many de facto standards in minicomputer
and scientific workstation communications. They include DEC's
entire protocol suite, Sun Microsystems' NFS, AT&T's protocols
and many others. We have also left out Apple's AppleTalk and AFP.
While these enjoy widespread use, that use is based on the
computers these companies are selling and not the proliferation
of the protocols throughout the networking industry.
Unfortunately, whether proprietary or standard, users are still
faced with the dilemma of choice. This choice is made slightly
easier by the shakeout and standardization that has occurred over
the past few years at the lower Physical and Data Link layers.
There are three choices, Token Ring, Ethernet or Arcnet. Right
now, the same is happening at the higher layers. Can you guess
which way things will go?
Page 12
PART 3: (PROTOCOLS: continued)
Data Link Protocols
As we said last month, the IEEE protocol standards are not
confined to the Physical layer but also work at the Data Link
layer. We also said that the Data Link layer is often divided
into two parts. The upper part is called Logical Link Control
(LLC) and the lower part is called Medium Access Control (MAC).
As it turns out, the IEEE standards define the lower, or MAC,
half of the Data Link layer -- the part that determines how
network users keep from bumping into each other.
Medium Access Control is just what it sounds like. It is the
protocol that determines which computer gets to use the network
cable when many computers are trying. We saw that IEEE 802.3 lets
everyone simply bump into each other and keep trying until they
get through. IEEE 802.4 and 802.5 are more ordered, limiting
conversation to the computer with the token. Remember, all of
this is done in fractions of a second. So even when the network
is crowded, no one really waits very long for access on any of
the three types of networks.
The other half of the Data Link layer, LLC, provides reliable
data transfer over the physical link. In essence, it manages the
physical link.
There are two reasons why the IEEE split the Data Link layer in
half (and why the ISO accepted it). First of all, the Data Link
layer has two jobs to do. The first is to coordinate the physical
transfer of data. The second is to manage access to the physical
medium. Splitting the job allows for more modularity, and
therefore flexibility.
The second reason also has to do with modularity, but in a
different way. The type of Medium Access Control has more to do
with the physical requirements of the network than actually
managing the transfer of data. In other words, the MAC layer is
"closer" to the physical layer than the LLC layer. By splitting
the two, it is possible to create a number of MAC layers
(corresponding to physical layers) and just one LLC layer that
can handle them all. This increases the flexibility of the
standard. It also gives LLC an important role in providing an
interface between the various MAC layers and the higher-layer
protocols.
By the way, Logical Link Control is the more common name of the
IEEE's 802.2 specification. The numbers give it away. 802.2 works
with 802.3, 802.4 and 802.5. It should also work with emerging
standards, like FDDI.
There are other protocols that perform the LLC functions.
High-level Data Link Control (HDLC) is the protocol from the ISO.
Like LLC, it conforms to the OSI model. IBM's SDLC (Synchronous
Data Link Control) is a Data Link layer standard that does not
conform to the OSI Model but does perform similar functions. IBM
has many products that do not follow the OSI Model or its
hierarchical setup. IBM has pledged support of OSI, however.
Page 13
Transport Protocols
The ISO is in the process of establishing protocol standards for
the middle layers of the OSI Model. As of yet, none of these have
been implemented on a widespread basis, nor has the complete OSI
protocol stack been established. To make matters more confusing,
most of the middle-layer protocols on the market today do not
conform neatly to the OSI Model's network, transport and session
layers. They were created before the ISO started work on the
model.
The good news is many existing protocols are being incorporated
into the OSI Model. Where existing protocols are not
incorporated, interfaces between them and the OSI Model are being
implemented. This is the case for TCP/IP, NetBIOS and APPC, the
major middle-layer protocols available today.
In the PC LAN environment, NetBIOS is the most important
protocol. It stands for Network Basic Input/Output System. IBM
developed it as a BIOS for networks. It is essentially a Session
layer (Layer 5) protocol that acts as an applications interface
to the network. It provides the tools for a program to establish
a session with another program over the network. Hundreds of
programs have been written to this interface, making it the most
widespread protocol in the PC network arena.
NetBIOS does not obey the rules of the OSI Model in that it does
not talk only to the layers above and below it. As we said,
programs can talk directly to NetBIOS, skipping the application
and presentation layers. This doesn't keep NetBIOS from doing its
job. It just makes it incompatible with the OSI Model, which is
not the end of the world. Someone will write an interface between
the two, soon.
NetBIOS is limited to working on one network. Therefore, some
network vendors have established an interface between NetBIOS and
TCP/IP, a protocol from the Department of Defense for use over
large combinations of networks (internetworks).
TCP/IP stands for Transmission Control Protocol/Internet
Protocol. TCP is a Transport protocol (Layer 4), corresponding to
the definition we gave above. Its job is to get data from one
place to another without errors. It forms an interface between
the protocols above and below -- shielding the upper layers from
concern about the connection and the lower layers from concern
about transmission content.
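TCP's core idea -- keep resending until the receiver acknowledges,
so the upper layers never see the loss -- can be pictured with a
toy model. This is a modern Python sketch, not TCP itself; the
loss rate, retry limit and function names are illustrative
assumptions:

```python
import random

def lossy_send(data, loss_rate=0.5):
    """Unreliable lower layer: delivers data or drops it (None)."""
    return None if random.random() < loss_rate else data

def reliable_send(data, max_tries=50):
    """Toy model of TCP's job: retransmit until delivery succeeds,
    shielding the layer above from errors and loss.  (Real TCP
    adds sequence numbers, windows and checksums.)"""
    for _ in range(max_tries):
        delivered = lossy_send(data)
        if delivered is not None:   # successful receipt = the ACK here
            return delivered
    raise TimeoutError("no acknowledgement")
```

Even over a channel that drops half the packets, the caller simply
sees the data arrive.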
The IP protocol is for getting data from one network to another.
Its main concern is bridging the differences between networks so
they don't have to be modified to talk to each other. It does
this by providing rules for the breakdown of data to conform with
a given network. Gateways, which are the physical translators
between networks, use IP's rules to take data from one network,
modify it and route it correctly over another network.
TCP/IP enjoys enormous support in government, scientific and
academic internetworks. These computers use UNIX and other
large-computer operating systems. In the past few years, business
internetworks have begun to approach the size of those in
government and universities. This has driven these businesses to
look for internetwork protocol standards. They have found TCP/IP
useful and it has become a de facto standard. Many see it as an
interim solution until the OSI transport and internetwork
protocols are finished. TCP/IP products for DOS-based networked
PCs are also available.
Often when TCP/IP is discussed, acronyms like SMTP, FTP and
TELNET are tossed around. These are applications that have been
written for TCP/IP and are widely used. They work at the
Applications layer (Layer 7). SMTP stands for Simple Mail
Transfer Protocol. FTP stands for File Transfer Protocol. TELNET
is the name for a terminal emulation protocol. These protocols,
written for TCP/IP, do exactly what they say they do.
Advanced Program-to-Program Communications, or APPC, is another
protocol for large networks. It comes from IBM and is part of Big
Blue's Systems Network Architecture (SNA). It is similar to
NetBIOS in that it provides an interface to the network for
programs so they may communicate, but it is not limited to one
network as is NetBIOS. APPC is geared toward mainframe computers,
though IBM is offering it as part of its OS/2 Extended Edition.
Using APPC, all computers communicate as peers, even PCs.
Previously in the IBM world, PCs were forced to emulate terminals
when communicating with mainframes. A number of other vendors,
mini and micro, also offer APPC.
APPC has received much publicity. Unfortunately, there are not
many applications for APPC in the PC network arena. There are
more in the minicomputer and mainframe network market.
Nevertheless, IBM and others are promoting APPC as a protocol
standard for the future. Its robustness, flexibility and
reliability make it worth the extra development effort.
There are other middle-layer protocols. XNS, IPX and NetBEUI are
all transport protocols. XNS is short for Xerox Network System.
It was one of the first local area network protocols used on a
wide basis, mainly for Ethernet (802.3) networks. 3Com and many
others use it. IPX is Novell's implementation of XNS. It is not
completely compatible with the original, but it is very widely
used. NetBEUI is IBM's transport protocol for its PC networking
products. All of these protocols perform similar tasks.
Many More
If it seems like the number of protocols is idiotic, it is and it
isn't. Different protocols have different advantages in different
environments. No single protocol stack will work better than
every other in every setting. NetBIOS seems to work fantastically
in small PC networks but is practically useless for communicating
with mainframes. APPC works well in mainframe environments.
TCP/IP excels in large internetworks.
On the other hand, much more is made about the differences in
protocols than is actually warranted. Proprietary protocols are
perfect solutions in many cases. Besides, if the proprietary
protocols are widespread enough, they become standards, and
gateways between them and other standards are built. This is
happening with some of the major protocols we have not covered.
These protocols include many de facto standards in minicomputer
and scientific workstation communications. They include DEC's
entire protocol suite, Sun Microsystems' NFS, AT&T's protocols
and many others. We have also left out Apple's AppleTalk and AFP.
While these enjoy widespread use, that use is based on the
computers these companies are selling and not the proliferation
of the protocols throughout the networking industry.
Unfortunately, whether proprietary or standard, users are still
faced with the dilemma of choice. This choice is made slightly
easier by the shakeout and standardization that has occurred over
the past few years at the lower Physical and Data Link layers.
There are three choices: Token Ring, Ethernet or Arcnet. Right
now, the same is happening at the higher layers. Can you guess
which way things will go?
PART 4: LAN Access Methods
Definition
Access method is the term given to the set of rules by which
networks arbitrate their use. It is the way the LAN keeps people
from crashing into each other as they use the network. Think of
the access method as traffic law. The network cable is the
street. Traffic law (access method) regulates the use of the
street (cable), saying who can drive (send data) where and at
what time.
Access method operates at the Physical layer of the network, the
lowest level of the OSI model. That's because it is worried about
the use of the cable that connects users. The access method
doesn't care what is being sent over the network, just like the
traffic law doesn't stipulate what you can carry. It just says
you have to drive on the right and obey the lights and signs.
Networks need access methods for the same reason streets need
traffic lights -- to keep people from hitting each other. On a
network, if two or more people try to send data at exactly the
same time, their signals will interfere with each other, ruining
the data being transmitted. The access method prevents this.
There are three major access methods in use today, though many
more exist. They are Ethernet, Arcnet and Token Ring. Actually,
these are wider-ranging standards that use particular access
methods. They also define other features of network transmission
besides the access method, like the electrical characteristics of
signals, the size of data packets sent, etc. Nevertheless, these
three standards are best known (and best distinguished) for the
access methods they employ.
Ethernet
Ethernet is the most common network access method. It was
developed by Xerox Corporation at its Palo Alto Research Center
facility in the mid-1970s. It is supported by Xerox, Digital
Equipment, Intel (the three of whom made it a standard) and many
other network vendors. At least half of the installed base of
network nodes (PCs, engineering workstations, minicomputers) use
Ethernet.
The Ethernet access method is Carrier Sense Multiple Access with
Collision Detection, or CSMA/CD. This is a broadcast access
method. That means every computer "hears" every transmission.
However, not every computer "listens" to every transmission.
Here's how it works.
When a computer wants to send a message it does, no questions
asked. The signal it sends moves up and down the cable in every
direction, passing every other computer on the network. Every
computer "hears" the message, but ignores it. Only the computer
to which the message is addressed recognizes the message and
sends an acknowledgement. The message is recognized because it
contains the address of the destination computer. The
acknowledgement can be correctly addressed because the original
message also contained the address of the sending computer.
What happens if two computers send at the same time? A collision.
This doesn't make any noise, but it does keep the messages from
going through. When it does happen, each of the colliding
computers backs off for a random amount of time and tries again.
This happens until they get through. Of course, the whole process
takes a small fraction of a second.
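The random back-off can be sketched in a few lines. This is a
Python illustration of 802.3-style truncated binary exponential
backoff; the 51.2-microsecond slot time is the classic 10-Mbps
Ethernet figure and is assumed here for concreteness:

```python
import random

def backoff_delay(attempt, slot_time=51.2e-6):
    """After the n-th collision in a row, wait a random number of
    slot times drawn from 0 .. 2**min(n, 10) - 1.  The range
    doubles with each collision, so heavily loaded stations spread
    out; the cap at 10 keeps the wait bounded."""
    k = min(attempt, 10)
    slots = random.randrange(2 ** k)
    return slots * slot_time
```

After the first collision a station waits zero or one slot; after
repeated collisions the possible wait grows to over a thousand
slots, which is why collisions rarely pile up indefinitely.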
Computers can tell if a collision has occurred because they don't
"hear" their own message in a given amount of time, determined by
the "propagation delay" of the network (the time it takes for a
signal to go to the end of the network and back). Remember,
messages move up and down the network in all directions. Every
computer hears every message, even its own messages. That is the
Carrier Sense Multiple Access with Collision Detection access
method.
It would seem Ethernet is an inefficient access method, prone to
collisions. But while collisions do happen often, they don't mean
very much in most cases. Since the whole
transmission/collision/retransmission process takes place so
quickly, the delay a collision causes is minuscule. Of course, if
you have lots of traffic, from lots of computers, the number of
collisions can mount and the network can slow down. This happens
with some large-scale imaging applications or on Ethernet network
segments with more than 50 to 75 nodes. Few Ethernet networks,
however, have a traffic load of more than 10 to 20 per cent,
which means delay caused by collisions is unnoticeable.
Arcnet
Arcnet was developed by Datapoint Corporation (San Antonio, TX)
in the early 1970s. The main Arcnet hardware vendors in the PC
network arena today are Datapoint, Standard Microsystems
(Hauppauge, NY) and Pure Data (Markham, Ontario). After Ethernet,
Arcnet is the most installed network access method, supported by
most network software vendors.
Arcnet is a token passing access method that works on a star-bus
topology. That means the network cable is laid out as a series of
stars, with each computer attached to a "hub" as the center of
the star and the hubs connected in a bus, or line. Hubs can
connect four, eight, 16 or 32 computers.
When a computer wants to send on an Arcnet network, it must have
the "token." The token is simply a series of data bits created by
one of the computers on the network. (There is a whole process
for token creation that we need not go into). It moves around the
network in a given pattern, a logical ring. All computers on the
network are numbered with an address -- from 1 to 255 (0 is
reserved), so the maximum number of computers on an Arcnet
segment is 255. The
token moves from computer to computer in numerical order, even if
adjacent numbers (e.g. 14 and 15) are at opposite ends of the
network. When the token reaches the highest number on the network
it moves to the lowest, thus creating a logical ring.
Once a computer has the token it can send one packet of data --
up to 512 bytes. It does so by attaching the destination address,
its own address, up to 508 bytes of data and some other
information to the token. This combination becomes the packet.
The entire packet then moves from node to node in sequential
order until it reaches the destination node. There the data is
removed and the token released to the next node in order.
Since one packet is often not enough for an entire message, the
token may need to make several rounds of the network to complete
a message.
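The logical ring described above can be sketched in a few lines of
Python. This is a hypothetical model, not Arcnet firmware -- real
token handling is done in hardware on the card:

```python
def token_order(node_ids):
    """The token visits Arcnet nodes in ascending numerical order,
    regardless of where they sit on the cable, forming the
    logical ring described above."""
    return sorted(node_ids)

def next_holder(ring, current):
    """The node after the highest address wraps back to the lowest,
    closing the ring."""
    i = ring.index(current)
    return ring[(i + 1) % len(ring)]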
The advantage of token passing is predictability. Because the
token moves through the network in a determined path, it is
possible to calculate how long it will take for it to move around
the network. Since the token will only carry up to 508 bytes at a
time, it is possible to calculate how long different sized
transmissions will take. This makes network performance very
predictable. It also means introduction of new network nodes will
have a predictable effect. This differs from Ethernet, where the
addition of new nodes may or may not seriously affect
performance.
The disadvantage of the token passing access method is the fact
that each node acts as a repeater, accepting and regenerating the
token as it passes around the network in a specific pattern. If
there is a malfunctioning node, the token may be destroyed or
simply lost, bringing down the whole network. There are, however,
provisions for token regeneration so that a lost or destroyed
token is not gone forever. The star topology also helps.
Token Ring
The Token Ring network was introduced by IBM in 1984. It is not
the first ring network, but it has had the most impact on the LAN
industry. It has evolved into IBM's ultimate connectivity
solution for all its computers -- personal, mini and mainframe.
IBM's specifications follow those of the IEEE's (Institute of
Electrical and Electronic Engineers) 802.5 standard. The other
major Token Ring hardware vendors are Proteon (Natick, MA), 3Com
(Santa Clara, CA) and Ungermann-Bass (Santa Clara, CA). The
network software vendors that support Token Ring hardware include
3Com, Novell (Orem, UT) and Univation (Milpitas, CA). The
installed base of Token Ring should surpass that of Ethernet and
Arcnet soon.
Like Arcnet, Token Ring networks use token passing. The
difference is computers are arranged in a physical ring. The
token moves around the ring, giving successive computers the
right to transmit. If a computer receives an empty token it may
fill it with a message of any length as long as the time to send
does not exceed the token-holding timer. This message moves
around the network with each computer regenerating it. Only the
receiving computer will copy the message into its memory, then
marking the message as received. It does not remove the message
from the ring. The sending computer does that when the message
comes back around.
Because each computer looks at the message and may act on it,
each computer can perform certain tests to make sure the message
is getting through correctly. Also, since the frame is copied and
marked rather than purged, the sending computer can see if the
destination computer exists and if the message was received when
the message comes back around.
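The copy-and-mark behavior can be modeled in a few lines. This is
a toy Python sketch; the field names and frame layout are
assumptions for illustration, not the 802.5 frame format:

```python
def circulate_frame(ring, sender, dest, payload):
    """Sketch of Token Ring frame handling: the frame passes every
    node in ring order; only the destination copies the data and
    marks the frame; the sender strips the frame when it comes
    back around, and can see from the mark that it arrived."""
    frame = {"dest": dest, "src": sender, "data": payload,
             "copied": False}
    start = ring.index(sender)
    n = len(ring)
    received = None
    for step in range(1, n + 1):        # one full trip of the ring
        node = ring[(start + step) % n]
        if node == dest:
            received = frame["data"]    # destination copies the data
            frame["copied"] = True      # ...and marks the frame
    return received, frame["copied"]    # sender removes frame here
```

The returned mark is how the sender learns, without any separate
acknowledgement packet, that the destination exists and got the
message.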
Token Ring networks have a priority mechanism whereby certain
computers can get the token faster than others. They can also
hold it for longer.
Token Ring's advantages include reliability and ease of
maintenance. It uses a star-wired ring topology in which all
computers are directly wired to a multi-station access unit, or
hub -- in IBM terms, a Multistation Access Unit (MAU).
These are connected in a ring. The multi-station access unit
allows malfunctioning computers to be disconnected from the
network. This overcomes the disadvantage of token passing, namely
the way in which one malfunctioning computer can bring down the
network since all computers are active in regenerating the token
and passing signals around the ring. Malfunctioning computers are
simply disconnected by unplugging them from the multi-station
access unit.
PART 5: LAN Interface Cards
Definition
The network interface card (NIC) is the piece of hardware that
fits inside your computer to provide the physical connection to
the network. Every computer attached to a LAN uses one sort of
network interface card or another. In most cases, the card fits
directly into the expansion bus of the computer. In some cases,
the card will be part of a separate unit to which the computer
attaches through a serial or parallel connection.
The interface card takes data from your PC, puts it into the
appropriate format and sends it over the cable to another LAN
interface card. This card receives the data, puts it into a form
the PC understands and sends it to the PC. Simple, right?
Wrong. To get one byte of data from here to there hundreds of
things must happen. Buffers must be checked. Requests must be
acknowledged. Sessions must be established. Tokens must be sent.
Collisions must be detected. The list can seem endless.
Luckily, the work of the interface card can be broken down into
eight tasks: host-card communications; buffering; packet
formation; parallel-serial conversion; encoding/decoding; cable
access; handshaking and transmission/reception. These are the
steps taken to get data from the memory of one computer to the
memory of another computer.
Preparing for Transmission
PC-NIC Communications. There are three ways to move data from the
PC's memory to the network card and back again: DMA, I/O mapping
and shared memory.
Shared memory is just that. Part of host memory is shared by the
network interface card's processor. This is a very fast method of
transfer, since no buffering on the card is required. Both the
card and the PC do their work on the data in the same place, so
no transfer is necessary.
DMA is short for Direct Memory Access. All Intel-based computers
come with something called a DMA controller. It takes care of the
transfer of data from an input/output device to the PC's main
memory so the PC's main microprocessor, or CPU, does not have to.
For DMA transfer, the controller or processor on the interface
card sends a signal to the CPU indicating it needs DMA. The CPU
then relinquishes control of the PC bus to the DMA controller.
(The bus is the piece of the computer that connects other parts,
e.g. memory and processor).
Once the DMA controller has command of the bus, it begins to take
the data from the card and place it directly in memory. It can do
this because it has been informed by the CPU of the appropriate
memory address at which to begin putting data in memory. After
all the data is in memory, the DMA controller returns control of
the bus to the CPU and tells it how much data has been put in
memory. Of course, the whole process takes fractions of a second.
There are several types of I/O transfer, depending on the PC and
peripheral. The two most important are memory-mapped I/O and
program I/O. In a memory-mapped I/O transfer, the host CPU
assigns some of its memory space to the I/O device, in this case
the network interface card. So out of the possible 640K bytes
available, some amount, say 12K bytes, are given to the network
card. This memory is then treated as if it were main memory. No
special instructions in the CPU are needed to get data from the
card since it is like taking data from one part of main memory to
another.
With program I/O, the CPU is given a set of special instructions
to handle the input/output functions. These instructions can be
built into the chip or come with software. To send data, a
request is sent from the network interface card to the CPU. The
CPU then moves the data from the card over the bus to main
memory.
Shared memory is the fastest method of moving data between the
network interface card and the PC, but it is not often used, for
cost and implementation reasons. The advantage of DMA is that it
removes work from the CPU, so it can perform other functions
while data transfer is taking place. The disadvantage is the CPU
cannot access memory while the DMA controller is working. I/O
mapping doesn't remove work from the CPU and it also takes up
memory, but it can be faster than DMA.
Different NICs use different types of NIC-PC communications.
Yours will probably use DMA or program I/O. Experts disagree on
which is better. If you really need to tell the difference,
you'll have to experiment with your own applications.
Buffering. Most NICs use a buffer. The buffer is a storage place
that holds data as it is moving into and out of the NIC. The
purpose of the buffer is simple: to make up for inherent delays
in transmission. To do this, a buffer temporarily holds data,
either for transmission onto the network cable or for transfer
into the PC. While in the buffer, data may be acted on,
"packetized" or "depacketized," or it may simply sit while the
NIC handles other things.
A buffer is needed because some parts of data transfer are slower
than others. Data usually comes into the card faster than it can
be converted to or from serial form, packetized or depacketized,
and sent. This is true in both directions.
Some NICs do not have buffers. Instead they use PC RAM. This can
be less expensive, but usually slower. It also takes up precious
memory.
Packet formation. This is the most important job of the network
and it is almost always done by the NIC. Packets are the units of
transmission used on most LANs. Files and messages for
transmission are broken up into packets as they are sent. At the
other end the packets are put together to reform the original
file or message.
A packet has three sections: header, data and trailer. The header
includes an alert to signal "packet on the way," the packet's
source address, destination address and clock information to
synchronize transmission. In some networks, headers also have
preambles used for various purposes, like setting up parameters
for transmission. They can also have a control field to direct
the packet through the network, a byte count and a message type
field.
The data section is just that, the data being sent, for example,
the numbers in a spreadsheet or words in a document. On some
networks, the data section of a packet is as large as 12K bytes.
On Ethernet, it is 1,500 bytes (about 1.5K). Most networks fall
between 1K and 4K bytes.
The trailer has an error-checking part called a cyclic
redundancy check (CRC). It is a number which is the result of a
mathematical calculation done on the packet by the sending NIC.
When the packet arrives at its destination, the same mathematical
calculation is repeated. If the result is the same, no errors
occurred in transmission -- all the ones and zeros were in the
right place. If the numbers don't match, an error occurred and
the packet is sent again. The trailer, like the header, can hold
other information.
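Packet formation and CRC checking can be sketched together. This
is a toy Python layout: the header fields are hypothetical (real
frame formats vary by network), and CRC-32 via the standard
library stands in for the NIC's hardware CRC circuit:

```python
import zlib

def make_packet(src, dest, data):
    """Build a packet with the three sections described above:
    header (addresses and length), data, and a CRC trailer
    computed over everything in front of it."""
    header = bytes([dest, src, len(data)])
    trailer = zlib.crc32(header + data).to_bytes(4, "big")
    return header + data + trailer

def check_packet(packet):
    """Receiving side: redo the same calculation and compare.
    A match means every bit arrived intact."""
    body, trailer = packet[:-4], packet[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == trailer
```

Flipping even one bit anywhere in the packet changes the CRC, so
the receiver's recalculation no longer matches and the packet is
sent again.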
TRANSMISSION
Parallel-serial conversion. Data comes from the PC in parallel
form, eight bits at a time. It must travel over the cable in
serial form, one bit at a time. And vice versa. Thus, the network
interface card must perform the conversion between the two forms.
Usually this is done by a controller on the card.
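The conversion is easy to picture in code. This is a Python
sketch; least-significant-bit-first order is an assumption here,
since bit ordering varies by network:

```python
def to_serial(byte_value):
    """Parallel-to-serial: spread one byte into eight bits,
    least significant bit first."""
    return [(byte_value >> i) & 1 for i in range(8)]

def to_parallel(bits):
    """Serial-to-parallel: reassemble eight bits into a byte."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value
```

Going out, each byte becomes a train of eight bits on the cable;
coming in, the bits are gathered back into bytes for the PC.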
Encoding/decoding.
Once a packet is formed and changed from parallel to serial, it
is ready for sending over the line. To do this it must be
encoded; it must be converted into a series of electrical pulses
that convey information.
Most network interface cards use Manchester encoding. Serial data
is divided into things called bit periods. Each of these periods
(actual fractions of seconds) is divided in half. The two halves
together represent a bit. From the first half to the second half
of each bit period there is a change in the polarity of the
signal, from positive to negative, or vice versa. There must be a
change during each bit period because the change represents the
data. A change from negative to positive represents a 1. A change
from positive to negative represents a 0. Or vice versa depending
on the network.
Of course, it is these 1's and 0's that represent data. That's
how digital data is actually sent using electrical impulses.
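Manchester encoding can be modeled with two signal levels per bit
period. This Python sketch uses 0 and 1 for the two polarities
and adopts the low-to-high-equals-1 convention, which as noted
above is only one of the two in use:

```python
def manchester_encode(bits):
    """Each bit period is split in half; the transition between
    the halves carries the data.  Low-to-high represents a 1,
    high-to-low a 0 (the opposite convention also exists)."""
    signal = []
    for bit in bits:
        signal += [0, 1] if bit == 1 else [1, 0]
    return signal

def manchester_decode(signal):
    """Read each half-pair back into a bit."""
    bits = []
    for i in range(0, len(signal), 2):
        bits.append(1 if (signal[i], signal[i + 1]) == (0, 1) else 0)
    return bits
```

Because every bit period is guaranteed a transition, the receiver
can also use the signal itself to stay in step with the sender's
clock.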
Cable access.
Before data can be sent, however, the network interface card must
have access to the cable. Not all cards can send at once. If they
do, their transmissions will collide and be lost.
The various access methods used were discussed last month. Token
Ring and Arcnet LANs use an electronic token to grant network
access. Ethernet lets any workstation transmit at will and then
look for collisions to see if it must transmit again. The entire
protocol for the access method, all the circuitry and firmware
(software written into hardware), resides on the network
interface card. This is its main job, getting data onto the cable in
reliable form.
Handshaking.
After getting data from the PC, formatting it, encoding it and
getting cable access, the network interface card has just one
more task before it can send data: handshaking. In order to send
data successfully, there must be a second NIC card waiting to
receive it. To make sure, there is a short period of
communication between two cards before data is sent. During this
period, the parameters for the upcoming communication are decided
upon through negotiation.
During negotiation, the transmitting card sends the parameters it
wants to use. The receiving card answers with its parameters. The
card with the slower, smaller, less complicated parameters always
wins. That's because more sophisticated cards can "lower"
themselves while less sophisticated cards can't "raise"
themselves.
Negotiation sets the maximum packet size to be sent, how many
packets before an answer, timer values, acknowledge time outs
(how long to wait for an answer), buffer sizes, etc.
Transmission/reception.
Finally, everything is set. The only thing left is for the
transceiver to put the data on the cable. The transceiver gives
the data power to make it down the line. It actually puts the
electrical signal out over the cable, making sure the data can
get to the next card, the next repeater, amplifier or bridge.
At the other end, a transceiver is waiting to accept the signal
and begin the whole process in reverse, from modulated signal
through decoding, serial-parallel conversion and depacketizing
to PC-readable data.
PICK A CARD
Most people look only at performance when they choose a network
interface card. Some will also consider the access method and
topology used. This is fine when choosing the type of network
interface card. Ethernet tends to be better for bursty networks.
Its topology makes sense in scientific environments. Token Ring
and Arcnet do better with constant traffic. Their topologies are
better in offices. But there is more to choosing a network
interface card than performance and access method.
The most important consideration in buying a network interface is
reliability. It doesn't matter how fast your network card is if
it doesn't work, causes errors, loses packets, drops the line,
etc. There is nothing more frustrating than having to isolate
network hardware problems. Moreover, once found, replacing a
network interface card means opening up a PC, setting dip
switches and possibly reconfiguring software. Look for a network
interface card that will work forever. This means talking to
users and installers.
Other considerations when buying a network interface card center
on your application. Figure out what kind of traffic your network
will be sending. Different topologies, different access methods
and different cabling schemes will guide your choice of network
interface card. So will the network operating system you plan on
using. But we'll get to that hairy subject in next month's
column.
-- by Aaron Brenner